Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently suggested as a way to explain and inspect a QA system's answer. In order to better generate such entailment trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. Contrary to previous approaches, our method combines generation steps and retrieval of premises, allowing the model to leverage intermediate conclusions, and mitigating the input size limit of baseline encoder-decoder models. We conduct experiments using the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
Explaining Answers with Entailment Trees
Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah Smith, Leighanna Pipatanangkura, Peter Clark
Our goal, in the context of open-domain textual question-answering (QA), is to explain answers by not just listing supporting textual evidence ("rationales"), but also showing how such evidence leads to the answer in a systematic way. If this could be done, new opportunities for understanding and debugging the system's reasoning would become possible. Our approach is to generate explanations in the form of entailment trees, namely a tree of entailment steps from facts that are known, through intermediate conclusions, to the final answer. To train a model with this skill, we created ENTAILMENTBANK, the first dataset to contain multistep entailment trees. At each node in the tree (typically) two or more facts compose together to produce a new conclusion. Given a hypothesis (question + answer), we define three increasingly difficult explanation tasks: generate a valid entailment tree given (a) all relevant sentences (the leaves of the gold entailment tree), (b) all relevant and some irrelevant sentences, or (c) a corpus. We show that a strong language model only partially solves these tasks, and identify several new directions to improve performance. This work is significant as it provides a new type of dataset (multistep entailments) and baselines, offering a new avenue for the community to generate richer, more systematic explanations.